
    Q-NET: A Network for Low-dimensional Integrals of Neural Proxies

    Many applications require the calculation of integrals of multidimensional functions. A general and popular procedure is to estimate integrals by averaging multiple evaluations of the function. Often, each evaluation of the function entails costly computations. The use of a proxy or surrogate for the true function is useful if repeated evaluations are necessary. The proxy is even more useful if its integral is known analytically and can be calculated practically. We propose the use of a versatile yet simple class of artificial neural networks -- sigmoidal universal approximators -- as a proxy for functions whose integrals need to be estimated. We design a family of fixed networks, which we call Q-NETs, that operate on the parameters of a trained proxy to calculate exact integrals over any subset of dimensions of the input domain. We identify transformations of the input space for which integrals may be recalculated without resampling the integrand or retraining the proxy. We highlight the benefits of this scheme for applications such as inverse rendering, generation of procedural noise, visualization and simulation. The proposed proxy is appealing in the following contexts: the dimensionality is low (< 10D); the estimation of integrals needs to be decoupled from the sampling strategy; sparse, adaptive sampling is used; marginal functions need to be known in functional form; or when powerful Single Instruction Multiple Data/Thread (SIMD/SIMT) pipelines are available for computation. Comment: 11 pages (including appendix and references).
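
    The closed-form integration that sigmoidal proxies admit is easiest to see in one dimension: the antiderivative of a sigmoid unit sigma(w x + b) is softplus(w x + b) / w (for w != 0), so a one-hidden-layer proxy integrates exactly over an interval. The sketch below only illustrates this property, with random weights standing in for a trained proxy; it is not the Q-NET construction from the paper.

```python
# A minimal 1D illustration (not the Q-NET operator itself) of why sigmoidal
# proxies admit closed-form integrals: for sigma(z) = 1/(1+e^-z), the
# antiderivative of sigma(w*x + b) in x is softplus(w*x + b)/w (w != 0).
import numpy as np

rng = np.random.default_rng(0)

def softplus(z):
    return np.logaddexp(0.0, z)          # log(1 + e^z), numerically stable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" proxy: f(x) = sum_i a_i * sigma(w_i * x + b_i) + c
a = rng.normal(size=8)
w = rng.normal(size=8) + 2.0             # keep weights away from zero
b = rng.normal(size=8)
c = 0.3

def proxy(x):
    return sigmoid(np.outer(x, w) + b) @ a + c

def exact_integral_01():
    # exact integral of each sigmoid unit over [0, 1], then weighted sum
    per_unit = (softplus(w + b) - softplus(b)) / w
    return float(a @ per_unit + c)

mc = proxy(rng.uniform(size=200_000)).mean()    # Monte Carlo check
print(exact_integral_01(), mc)                  # the two should agree closely
```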

    Underwater 3D Structures As Semantic Landmarks in SONAR Mapping


    Edge-preserving Multiscale Image Decomposition based on Local Extrema

    We propose a new model for detail that inherently captures oscillations, a key property that distinguishes textures from individual edges. Inspired by techniques in empirical data analysis and morphological image analysis, we use the local extrema of the input image to extract information about oscillations: we define detail as oscillations between local minima and maxima. Building on the key observation that the spatial scale of oscillations is characterized by the density of local extrema, we develop an algorithm for decomposing images into multiple scales of superposed oscillations. Current edge-preserving image decompositions assume image detail to be low-contrast variation. Consequently, they apply filters that extract features with increasing contrast as successive layers of detail. As a result, they are unable to distinguish between high-contrast, fine-scale features and edges of similar contrast that are to be preserved. We compare our results with existing edge-preserving image decomposition algorithms and demonstrate exciting applications that are made possible by our new notion of detail.
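
    As a hedged sketch of the underlying idea, the 1D analogue below (the helper names are assumptions, not the paper's API) extracts one scale of detail as the oscillation about the mean of envelopes interpolated through local extrema; the published algorithm operates on 2D images and iterates over multiple scales.

```python
# A 1D analogue of extrema-based detail extraction: envelopes are interpolated
# through local minima and maxima, their mean gives a coarse layer, and the
# residual is one scale of oscillatory detail. Illustrative only; the paper's
# method works on 2D images.
import numpy as np

def local_extrema(y):
    """Indices of strict local maxima and minima (endpoints included)."""
    d = np.diff(y)
    maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    ends = np.array([0, len(y) - 1])
    return np.union1d(maxima, ends), np.union1d(minima, ends)

def one_scale(y):
    """Split y into a coarse layer and one scale of oscillatory detail."""
    x = np.arange(len(y))
    max_idx, min_idx = local_extrema(y)
    upper = np.interp(x, max_idx, y[max_idx])   # envelope through maxima
    lower = np.interp(x, min_idx, y[min_idx])   # envelope through minima
    coarse = 0.5 * (upper + lower)
    return coarse, y - coarse                   # detail = oscillation about mean envelope

x = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 3 * x) + 0.2 * np.sin(2 * np.pi * 40 * x)
coarse, detail = one_scale(signal)              # 'detail' captures the fast oscillation
```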

    Generating Parametric BRDFs from Natural Language Descriptions

    Artistic authoring of 3D environments is a laborious enterprise that also requires skilled content creators. There have been impressive improvements in using machine learning to address different aspects of generating 3D content, such as generating meshes, arranging geometry, synthesizing textures, etc. In this paper we develop a model to generate Bidirectional Reflectance Distribution Functions (BRDFs) from descriptive textual prompts. BRDFs are four-dimensional probability distributions that characterize the interaction of light with surface materials. They are either represented parametrically, or by tabulating the probability density associated with every pair of incident and outgoing angles. The former lends itself to artistic editing while the latter is used when measuring the appearance of real materials. Numerous works have focused on hypothesizing BRDF models from images of materials. We learn a mapping from textual descriptions of materials to parametric BRDFs. Our model is first trained using a semi-supervised approach before being tuned via an unsupervised scheme. Although our model is general, in this paper we specifically generate parameters for MDL materials, conditioned on natural language descriptions, within NVIDIA's Omniverse platform. This enables use cases such as real-time text prompts to change materials of objects in 3D environments, such as "dull plastic" or "shiny iron". Since the output of our model is a parametric BRDF, rather than an image of the material, it may be used to render materials on any shape under arbitrarily specified viewing and lighting conditions.
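
    A minimal sketch of what such a text-to-parameters mapping could look like, assuming a frozen text encoder and an illustrative parameter set (base colour, roughness, metallic, specular); this is not the authors' architecture and not the actual MDL parameter schema.

```python
# Hedged sketch: a small head that turns a pooled text embedding into a few
# parametric BRDF values. The parameter names, dimensions, and two-layer MLP
# are illustrative assumptions only.
import torch
import torch.nn as nn

class TextToBRDFParams(nn.Module):
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 6),           # 3 (base colour) + roughness + metallic + specular
        )

    def forward(self, text_embedding: torch.Tensor) -> dict:
        p = torch.sigmoid(self.head(text_embedding))   # squash all parameters into [0, 1]
        return {
            "base_color": p[..., 0:3],
            "roughness":  p[..., 3],
            "metallic":   p[..., 4],
            "specular":   p[..., 5],
        }

# Usage: encode a prompt such as "dull plastic" with any frozen text encoder
# producing a 512-d embedding, then map it to renderable material parameters.
model = TextToBRDFParams()
fake_embedding = torch.randn(1, 512)      # stand-in for a real text encoder output
params = model(fake_embedding)
```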

    A Theoretical Analysis of Compactness of the Light Transport Operator

    Rendering photorealistic visuals of virtual scenes requires tractable models for the simulation of light. The rendering equation describes one such model using an integral equation, the crux of which is a continuous integral operator. A majority of rendering algorithms aim to approximate the effect of this light transport operator via discretization (using rays, particles, patches, etc.). Research spanning four decades has uncovered interesting properties and intuition surrounding this operator. In this paper we analyze compactness, a key property that is independent of its discretization and which characterizes the ability to approximate the operator uniformly by a sequence of finite-rank operators. We conclusively prove lingering suspicions that this operator is not compact and therefore that any discretization relying on finite-rank or non-adaptive finite bases is susceptible to unbounded error over arbitrary light distributions. Our result justifies the expectation that rendering algorithms be evaluated using a variety of scenes and illumination conditions. We also discover that its lower-dimensional counterpart (over purely diffuse scenes) is not compact except in special cases, and uncover connections with it being noninvertible and acting as a low-pass filter. We explain the relevance of our results in the context of previous work. We believe that our theoretical results will inform future rendering algorithms regarding practical choices.
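
    For context, the operator under analysis is the standard light transport operator of the rendering equation; in one common notation (which may differ from the paper's):

```latex
% Standard operator form of the rendering equation. T is the light transport
% operator whose compactness is analyzed; compactness would mean T is a
% uniform limit of finite-rank operators.
\begin{align*}
  (T L)(\mathbf{x}, \omega_o) &=
      \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
      L\big(r(\mathbf{x}, \omega_i), -\omega_i\big)\,
      |\cos\theta_i| \,\mathrm{d}\omega_i, \\
  L &= L_e + T L
      \;\;\Longrightarrow\;\;
      L = \sum_{k \ge 0} T^{k} L_e
      \quad \text{(Neumann series, when it converges),}
\end{align*}
% where r(x, omega) is the ray-casting function returning the surface point
% visible from x in direction omega, and L_e is emitted radiance.
```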

    Dist2Cycle: A Simplicial Neural Network for Homology Localization

    Simplicial complexes can be viewed as high-dimensional generalizations of graphs that explicitly encode multi-way ordered relations between vertices at different resolutions, all at once. This concept is central to the detection of higher-dimensional topological features of data, features to which graphs, encoding only pairwise relationships, remain oblivious. While attempts have been made to extend Graph Neural Networks (GNNs) to a simplicial complex setting, the methods do not inherently exploit, or reason about, the underlying topological structure of the network. We propose a graph convolutional model for learning functions parametrized by the k-homological features of simplicial complexes. By spectrally manipulating their combinatorial k-dimensional Hodge Laplacians, the proposed model enables learning topological features of the underlying simplicial complexes, specifically, the distance of each k-simplex from the nearest "optimal" k-th homology generator, effectively providing an alternative to homology localization. Comment: 9 pages, 5 figures.
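
    As a hedged illustration of the object the model manipulates, the sketch below assembles the combinatorial 1-dimensional Hodge Laplacian of a toy simplicial complex from its boundary matrices and reads off the number of 1-dimensional holes from its kernel; the neural network itself is not reproduced here.

```python
# The combinatorial k-Hodge Laplacian (k = 1 here): L_k = B_k^T B_k + B_{k+1} B_{k+1}^T,
# with B_k the signed boundary matrix from k-simplices to (k-1)-simplices.
# The toy complex (one filled triangle plus an empty 1-2-3 loop) is an
# illustration only, not the paper's data or model.
import numpy as np

vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]    # oriented i < j
triangles = [(0, 1, 2)]                              # only this face is filled

# B1: boundary of edges (columns) expressed over vertices (rows)
B1 = np.zeros((len(vertices), len(edges)))
for col, (i, j) in enumerate(edges):
    B1[i, col], B1[j, col] = -1.0, 1.0

# B2: boundary of triangles (columns) expressed over edges (rows)
B2 = np.zeros((len(edges), len(triangles)))
for col, (i, j, k) in enumerate(triangles):
    B2[edges.index((j, k)), col] += 1.0
    B2[edges.index((i, k)), col] -= 1.0
    B2[edges.index((i, j)), col] += 1.0

L1 = B1.T @ B1 + B2 @ B2.T                           # 1-Hodge Laplacian
beta1 = L1.shape[0] - np.linalg.matrix_rank(L1)      # dim of kernel = # of 1-d holes
print(beta1)                                         # 1: the unfilled 1-2-3 loop
```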